Improving generalization is a major challenge in audio classification due to the scarcity of labeled data. Self-supervised learning (SSL) methods tackle this by leveraging unlabeled data to learn useful features for downstream classification tasks. In this work, we propose an augmented contrastive SSL framework to learn invariant representations from unlabeled data. Our method applies diverse perturbations to the unlabeled input data and utilizes contrastive learning to learn representations that are robust to such perturbations. Experimental results on the AudioSet and DESED datasets show that our framework significantly outperforms state-of-the-art SSL and supervised learning methods on sound/event classification tasks.
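As background to the framework described above, the perturbation-invariance idea can be sketched with a SimCLR-style NT-Xent contrastive loss between two perturbed views of each unlabeled clip. This is a minimal illustration, not the authors' exact loss; the encoder and perturbation functions in the usage comments are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss between two batches of embeddings of the same
    unlabeled clips under different perturbations (SimCLR-style NT-Xent)."""
    z1 = F.normalize(z1, dim=1)          # (B, D)
    z2 = F.normalize(z2, dim=1)          # (B, D)
    z = torch.cat([z1, z2], dim=0)       # (2B, D)
    sim = z @ z.t() / temperature        # scaled cosine similarities
    n = z.shape[0]
    sim.fill_diagonal_(float('-inf'))    # exclude self-similarity
    # the positive of sample i is its other view: (i + B) mod 2B
    targets = (torch.arange(n, device=z.device) + n // 2) % n
    return F.cross_entropy(sim, targets)

# Illustrative usage (encoder and perturbations are placeholders):
# x1, x2 = perturb(batch), perturb(batch)   # e.g. noise, time shift, pitch shift
# loss = nt_xent_loss(encoder(x1), encoder(x2))
# loss.backward()
```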
A long-standing goal of machine-learning-based protein engineering is to accelerate the discovery of novel mutations that improve the function of a known protein. We introduce a sampling framework for evolving proteins in silico that supports mixing and matching a variety of unsupervised models, such as protein language models, and supervised models that predict protein function from sequence. By composing these models, we aim to improve our ability to evaluate unseen mutations and constrain search to regions of sequence space likely to contain functional proteins. Our framework achieves this without any model fine-tuning or re-training by constructing a product of experts distribution directly in discrete protein space. Instead of resorting to brute force search or random sampling, which is typical of classic directed evolution, we introduce a fast MCMC sampler that uses gradients to propose promising mutations. We conduct in silico directed evolution experiments on wide fitness landscapes and across a range of different pre-trained unsupervised models, including a 650M parameter protein language model. Our results demonstrate an ability to efficiently discover variants with high evolutionary likelihood as well as estimated activity multiple mutations away from a wild type protein, suggesting our sampler provides a practical and effective new paradigm for machine-learning-based protein engineering.
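A minimal sketch of the product-of-experts idea above, assuming each expert is a callable that scores a sequence (e.g. a protein language model log-likelihood or a supervised fitness prediction). It uses plain Metropolis-Hastings with uniform single-site proposals; the paper's contribution is a faster sampler whose proposals are guided by expert gradients, which this simplified sketch deliberately omits.

```python
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def product_of_experts_logp(seq, experts, weights):
    """Unnormalized log-density of a sequence under a product of experts:
    a weighted sum of each expert's log-score. Each expert is assumed to be
    a callable mapping a sequence string to a scalar."""
    return sum(w * expert(seq) for expert, w in zip(experts, weights))

def mh_directed_evolution(wild_type, experts, weights, n_steps=1000, seed=0):
    """Metropolis-Hastings over single-site mutations of `wild_type`.
    The paper's sampler replaces the uniform proposal below with one guided
    by gradients of the experts; this sketch keeps the simpler proposal."""
    rng = np.random.default_rng(seed)
    seq = list(wild_type)
    logp = product_of_experts_logp("".join(seq), experts, weights)
    for _ in range(n_steps):
        cand = seq.copy()
        cand[rng.integers(len(cand))] = rng.choice(AMINO_ACIDS)   # propose a mutation
        cand_logp = product_of_experts_logp("".join(cand), experts, weights)
        if np.log(rng.random()) < cand_logp - logp:               # accept / reject
            seq, logp = cand, cand_logp
    return "".join(seq), logp
```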
We present a new approach to unsupervised learning of shape correspondence between pairs of point clouds. We make the first attempt to adapt the classical locally linear embedding (LLE) algorithm, originally designed for nonlinear dimensionality reduction, to shape correspondence. The key idea is to find dense correspondences between shapes by first obtaining high-dimensional, neighborhood-preserving embeddings of the low-dimensional point clouds and then aligning the source and target embeddings using locally linear transformations. We show that learning the embeddings with a new LLE-inspired point cloud reconstruction objective yields accurate shape correspondences. More specifically, the approach comprises an end-to-end learnable framework that extracts high-dimensional, neighborhood-preserving embeddings, estimates locally linear transformations in the embedding space, and aligns the shapes by reconstructing the target shape from a probability density function built on a divergence-based measure. Our method enforces the embeddings of shapes in correspondence to lie in the same universal/canonical embedding space, which ultimately helps regularize the learning process and allows a simple nearest-neighbor search between shape embeddings to find reliable correspondences. Comprehensive experiments show that the new method achieves clear improvements on standard shape correspondence benchmark datasets covering both human and non-human shapes.
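For context, the classical LLE reconstruction objective the above builds on can be sketched as follows: barycentric weights are fitted over each point's neighborhood in the input point cloud, and the learned embedding is penalized for failing to reproduce them. The neighborhood size, regularizer, and function names are illustrative assumptions, not the authors' network or training pipeline.

```python
import numpy as np

def lle_weights(X, k=10, reg=1e-3):
    """Classic LLE step: for each point, find barycentric weights over its k
    nearest neighbors that best reconstruct it."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]                   # k nearest neighbors
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                              # centered neighborhood
        G = Z @ Z.T                                        # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)                 # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()                        # affine (sum-to-one) weights
    return W

def reconstruction_loss(E, W):
    """Penalty ||E - W E||^2, an LLE-style objective on a learned embedding E."""
    return ((E - W @ E) ** 2).sum()
```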
Breast cancer is the most common cancer among women worldwide. Early diagnosis of breast cancer can significantly improve treatment efficiency. Computer-aided diagnosis (CAD) systems are widely adopted thanks to their reliability, accuracy, and affordability. Different imaging techniques exist for breast cancer diagnosis; the most accurate, and the one used in this paper, is histopathology. Deep transfer learning is the main idea behind the feature extractor of the proposed CAD system. Although 16 different pre-trained networks have been tested in this study, our main focus is on the classification phase. Among all the tested CNNs, Inception-ResNet-v2, which benefits from both residual and Inception networks, showed the best feature extraction capability. In the classification phase, an ensemble of CatBoost, XGBoost, and LightGBM provided the best average accuracy. The BreakHis dataset is used to evaluate the proposed method. BreakHis contains 7,909 histopathology images (2,480 benign and 5,429 malignant) across four magnification factors. Using 70% of the BreakHis dataset as training data, the accuracy of the proposed method (IRV2-CXL) is 96.82%, 95.84%, 97.01%, and 96.15% at 40X, 100X, 200X, and 400X magnification, respectively. Most studies on automatic breast cancer detection have focused on feature extraction, which led us to concentrate on the classification phase. IRV2-CXL shows better or comparable results thanks to its soft voting ensemble method, which combines the strengths of CatBoost, XGBoost, and LightGBM.
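The classification stage described above, a soft-voting ensemble of CatBoost, XGBoost, and LightGBM on CNN-extracted features, maps onto standard library components. A minimal sketch under that assumption (default hyperparameters, data handling omitted):

```python
from sklearn.ensemble import VotingClassifier
from catboost import CatBoostClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Soft-voting ensemble over features extracted by a pre-trained CNN
# (Inception-ResNet-v2 in the paper). Settings here are library defaults,
# not the paper's tuned configuration.
ensemble = VotingClassifier(
    estimators=[
        ("catboost", CatBoostClassifier(verbose=0)),
        ("xgboost", XGBClassifier()),
        ("lightgbm", LGBMClassifier()),
    ],
    voting="soft",   # average the predicted class probabilities
)

# X_train/X_test are assumed to be CNN feature vectors for each image,
# y_train/y_test the benign/malignant labels.
# ensemble.fit(X_train, y_train)
# print(ensemble.score(X_test, y_test))
```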
Understanding 3D scenes is a key prerequisite for autonomous agents. Recently, LiDAR and other sensors have provided large amounts of data in the form of temporal sequences of point cloud frames. In this work, we propose a new problem, sequential scene flow estimation (SSFE), which aims to predict the 3D scene flow for all point clouds in a given sequence. This differs from the previously studied scene flow estimation problem, which focuses on two frames. We introduce the SPCM-Net architecture, which solves this problem by computing multi-scale spatiotemporal correlations between neighboring point clouds and then aggregating them over time with an order-invariant recurrent unit. Our experimental evaluation confirms that recurrent processing of point cloud sequences yields significantly better SSFE than using only two frames. In addition, we show that the method can be effectively modified for sequential point cloud forecasting (SPF), a related problem that requires predicting future point cloud frames. Our experimental results are evaluated on a new benchmark for both SSFE and SPF, consisting of synthetic and real datasets. Previously, datasets for scene flow estimation were limited to two frames. We provide non-trivial extensions of these datasets for multi-frame estimation and forecasting. Because ground-truth motion is difficult to obtain for real-world datasets, we use self-supervised training and evaluation metrics. We believe this benchmark will be pivotal to future research in this area. All code for the benchmark and models will be made accessible.
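The abstract does not specify SPCM-Net's layers, but its central operation, correlating point features across neighboring frames, can be illustrated with a single-scale sketch; the tensor names and neighborhood size are assumptions, and the multi-scale and recurrent aggregation steps are omitted.

```python
import torch

def point_correlation(xyz1, feat1, xyz2, feat2, k=16):
    """For each point in frame 1, gather its k spatially nearest neighbors in
    frame 2 and take dot products of their features (a simple single-scale
    cost volume between two neighboring point cloud frames)."""
    d = torch.cdist(xyz1, xyz2)                      # (N1, N2) pairwise distances
    idx = d.topk(k, largest=False).indices           # (N1, k) neighbor indices
    nbr_feat = feat2[idx]                            # (N1, k, C) gathered features
    corr = (feat1.unsqueeze(1) * nbr_feat).sum(-1)   # (N1, k) correlations
    return corr, idx

# Example shapes: xyz of frames t and t+1 are (N, 3); per-point features are (N, C).
```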
This paper presents a novel technique based on gradient boosting to train the final layers of a neural network (NN). Gradient boosting is an additive expansion algorithm in which a series of models are trained sequentially to approximate a given function. A neural network can also be seen as an additive expansion where the scalar product of the responses of the last hidden layer and its weights provides the final output of the network. Instead of training the network as a whole, the proposed algorithm trains the network sequentially in $T$ steps. First, the bias term of the network is initialized with a constant approximation that minimizes the average loss of the data. Then, at each step, a portion of the network, composed of $J$ neurons, is trained to approximate the pseudo-residuals on the training data computed from the previous iterations. Finally, the $T$ partial models and bias are integrated as a single NN with $T \times J$ neurons in the hidden layer. Extensive experiments in classification and regression tasks, as well as in combination with deep neural networks, are carried out, showing a competitive generalization performance with respect to neural networks trained with different standard solvers, such as Adam, L-BFGS, and SGD, as well as deep models. Furthermore, we show that the design of the proposed method permits switching off a number of hidden units at test time (the units that were last trained) without a significant reduction of its generalization ability. This permits the adaptation of the model to different classification speed requirements on the fly.
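A minimal sketch of the sequential scheme above for squared-error regression, using scikit-learn's MLPRegressor as each $J$-neuron stage; this is an illustration under that assumption rather than the authors' implementation, and it keeps the stages additive instead of merging them into one network (which gives the same function for single-hidden-layer stages).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def boosted_hidden_units(X, y, T=10, J=5):
    """Sequential training for squared loss: start from the constant that
    minimizes the average loss (the bias term), then at each of T steps fit a
    small network with J hidden units to the current pseudo-residuals.
    X and y are assumed to be NumPy arrays."""
    bias = float(np.mean(y))                 # constant minimizer of squared loss
    pred = np.full(len(y), bias)
    stages = []
    for _ in range(T):
        residuals = y - pred                 # pseudo-residuals for squared loss
        stage = MLPRegressor(hidden_layer_sizes=(J,), max_iter=2000)
        stage.fit(X, residuals)
        pred = pred + stage.predict(X)
        stages.append(stage)
    return bias, stages                      # T partial models plus the bias

def predict(X, bias, stages):
    """Additive prediction; the paper instead merges the T stages into a
    single network with T*J hidden units."""
    return bias + sum(stage.predict(X) for stage in stages)
```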